In [ ]:
epochs = 50

Part 7 - Federated Learning with FederatedDataset

Here we introduce a new tool for working with federated datasets: a FederatedDataset class, which is used in the same way as the PyTorch Dataset class, and a federated data loader, FederatedDataLoader, which iterates over it in a federated fashion. (A minimal sketch of this pattern follows just after the credits below.)

Authors:

Translators:
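Before we start, here is a minimal, self-contained sketch of the pattern we will follow, using two hypothetical VirtualWorkers (alice and bob) rather than the sandbox we create below: each worker's shard becomes a BaseDataset of pointer tensors, the BaseDatasets are grouped into a FederatedDataset, and a FederatedDataLoader iterates over them worker by worker.


In [ ]:
import torch as th
import syft as sy

hook = sy.TorchHook(th)
alice = sy.VirtualWorker(hook, id="alice")  # hypothetical workers, for illustration only
bob = sy.VirtualWorker(hook, id="bob")

# Each worker holds its own shard of the data as pointer tensors
alice_set = sy.BaseDataset(th.randn(8, 2).send(alice), th.randn(8, 1).send(alice))
bob_set = sy.BaseDataset(th.randn(8, 2).send(bob), th.randn(8, 1).send(bob))

# Group the per-worker datasets and iterate over them in a federated fashion
fed_dataset = sy.FederatedDataset([alice_set, bob_set])
fed_loader = sy.FederatedDataLoader(fed_dataset, batch_size=4)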

We use the sandbox that we discovered in the last lesson.


In [ ]:
import torch as th
import syft as sy
# create_sandbox populates globals() with sandbox workers (alice, bob, ...) and a grid to search
sy.create_sandbox(globals(), verbose=False)

Then search for a dataset


In [ ]:
boston_data = grid.search("#boston", "#data")
boston_target = grid.search("#boston", "#target")
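The search returns a dictionary mapping each worker's id to a list of pointers to the tensors that matched the tags. If you are curious, here is a small sketch of how to inspect it:


In [ ]:
# boston_data maps worker id -> list of pointers to that worker's shard of the data
for worker_id, pointers in boston_data.items():
    print(worker_id, pointers[0].shape)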

We load a model and an optimizer


In [ ]:
n_features = boston_data['alice'][0].shape[1]  # infer the feature count from one shard
n_targets = 1

model = th.nn.Linear(n_features, n_targets)

Here we cast the data we fetched into a FederatedDataset. See the workers which hold part of the data.


In [ ]:
# Cast the search results into BaseDatasets
datasets = []
for worker in boston_data.keys():
    dataset = sy.BaseDataset(boston_data[worker][0], boston_target[worker][0])
    datasets.append(dataset)

# Build the FederatedDataset object
dataset = sy.FederatedDataset(datasets)
print(dataset.workers)
# Create one optimizer per worker
optimizers = {}
for worker in dataset.workers:
    optimizers[worker] = th.optim.Adam(params=model.parameters(), lr=1e-2)

We put it in a FederatedDataLoader and specify the options.


In [ ]:
train_loader = sy.FederatedDataLoader(dataset, batch_size=32, shuffle=False, drop_last=False)
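To see what the loader yields, you can peek at a single batch: data and target are pointers to tensors living on one worker, which is why the training loop below picks the optimizer via data.location.id. A small sketch:


In [ ]:
# Peek at the first batch: both tensors are pointers to the same remote worker
data, target = next(iter(train_loader))
print(data.location.id, data.shape, target.shape)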

Finally we iterate over the epochs. You can see how similar this is to pure, local PyTorch training!


In [ ]:
for epoch in range(1, epochs + 1):
    loss_accum = 0
    for batch_idx, (data, target) in enumerate(train_loader):
        # Send the model to the worker that holds this batch
        model.send(data.location)

        # Use the optimizer associated with that worker
        optimizer = optimizers[data.location.id]
        optimizer.zero_grad()
        pred = model(data)
        loss = ((pred.view(-1) - target)**2).mean()
        loss.backward()
        optimizer.step()

        # Bring the updated model and the loss value back
        model.get()
        loss = loss.get()

        loss_accum += float(loss)

        if batch_idx % 8 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tBatch loss: {:.6f}'.format(
                epoch, batch_idx, len(train_loader),
                       100. * batch_idx / len(train_loader), loss.item()))

    print('Total loss', loss_accum)
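If you want to check the trained model, you can score it on one worker's share of the data using the same send/compute/get pattern as above. A rough sketch, reusing the pointers found by the search:


In [ ]:
# Sketch: evaluate the trained model on the data held by alice
data_ptr, target_ptr = boston_data['alice'][0], boston_target['alice'][0]
model.send(data_ptr.location)                      # ship the model to alice
pred = model(data_ptr)                             # prediction happens remotely
mse = ((pred.view(-1) - target_ptr) ** 2).mean()   # remote loss tensor
print('MSE on alice:', mse.get().item())           # fetch only the scalar back
model.get()                                        # retrieve the model locally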

Congratulations!!! - Time to Join the Community!

Congratulations on completing this notebook tutorial! If you enjoyed it and would like to join the movement toward privacy-preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways.

Star PySyft on GitHub

The easiest way to help our community is just by starring the GitHub repos! This helps raise awareness of the tools we're building.

Join our Slack!

The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at http://slack.openmined.org

Join a Code Project!

The best way to contribute to our community is to become a code contributor! You can go to the PySyft GitHub Issues page and filter for "Projects". This will show you all the top-level tickets, giving an overview of which projects you can join! If you don't want to join a project but would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for GitHub issues marked "good first issue".

If you don't have time to contribute to our codebase but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!

OpenMined's Open Collective Page


In [ ]: